Realtime Profiling of Fine-Grained Air Quality Index Distribution using UAV Sensing
Given significant air pollution problems, air quality index (AQI) monitoring
has recently received increasing attention. In this paper, we design a mobile
AQI monitoring system mounted on unmanned aerial vehicles (UAVs), called ARMS,
to efficiently build fine-grained AQI maps in real time. Specifically, we first
propose the Gaussian plume model based on a neural network (GPM-NN) to
physically characterize particle dispersion in the air. Based on GPM-NN, we
propose a battery-efficient, adaptive monitoring algorithm that monitors AQI at
selected locations and constructs an accurate AQI map from the sensed data.
The proposed adaptive monitoring algorithm is evaluated in two typical
scenarios: a two-dimensional open space, such as a roadside park, and a
three-dimensional space, such as a courtyard inside a building. Experimental
results demonstrate that our system provides higher AQI prediction accuracy
with GPM-NN than existing models, while greatly reducing power consumption
with the adaptive monitoring algorithm.
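The physical core of a Gaussian plume model is the closed-form concentration of a continuous point source under steady wind. A minimal sketch of that term follows; the paper's neural-network coupling and fitted dispersion coefficients are not given here, so the linear sigma growth and all default parameter values (`a`, `b`, `Q`, `u`, `H`) are illustrative assumptions, not the paper's values.

```python
import numpy as np

def gaussian_plume(x, y, z, Q=1.0, u=2.0, H=10.0, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration at downwind distance x,
    crosswind offset y, and height z.

    Q: emission rate, u: wind speed, H: effective source height.
    sigma_y / sigma_z grow linearly with x here for simplicity; the
    coefficients a, b stand in for atmospheric stability parameters.
    """
    sigma_y = a * x
    sigma_z = b * x
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical
```

A model of this shape gives the physical prior that a data-driven component can then correct against sensed AQI readings.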
Enhancing Space-time Video Super-resolution via Spatial-temporal Feature Interaction
The target of space-time video super-resolution (STVSR) is to increase both
the frame rate (also referred to as the temporal resolution) and the spatial
resolution of a given video. Recent approaches solve STVSR with end-to-end deep
neural networks. A popular solution is to first increase the frame rate of the
video; then perform feature refinement among different frame features; and last
increase the spatial resolutions of these features. The temporal correlation
among features of different frames is carefully exploited in this process. The
spatial correlation among features of different (spatial) resolutions, though
also very important, has received far less attention. In this paper, we propose
a spatial-temporal feature interaction network to enhance STVSR by exploiting
both spatial and temporal correlations among features of different frames and
spatial resolutions. Specifically, the spatial-temporal frame interpolation
module is introduced to interpolate low- and high-resolution intermediate frame
features simultaneously and interactively. The spatial-temporal local and
global refinement modules are respectively deployed afterwards to exploit the
spatial-temporal correlation among different features for their refinement.
Finally, a novel motion consistency loss is employed to enhance the motion
continuity among reconstructed frames. We conduct experiments on three standard
benchmarks, Vid4, Vimeo-90K and Adobe240, and the results demonstrate that our
method outperforms state-of-the-art methods by a considerable margin. Our
code will be available at
https://github.com/yuezijie/STINet-Space-time-Video-Super-resolution
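The paper does not spell out the exact form of the motion consistency loss, but its stated goal, enforcing motion continuity among reconstructed frames, can be sketched as penalizing the mismatch between temporal differences of the predicted and ground-truth sequences. The function below is a NumPy illustration of that idea under this assumption, not the paper's definition.

```python
import numpy as np

def motion_consistency_loss(pred, target):
    """Mean absolute error between the frame-to-frame changes of a
    predicted video and those of the ground truth.

    pred, target: arrays of shape (batch, time, channels, height, width).
    Differencing along the time axis isolates motion, so a constant
    brightness offset between pred and target incurs no penalty.
    """
    pred_motion = np.diff(pred, axis=1)      # per-step change in prediction
    target_motion = np.diff(target, axis=1)  # per-step change in ground truth
    return np.abs(pred_motion - target_motion).mean()
```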
On Positional and Structural Node Features for Graph Neural Networks on Non-attributed Graphs
Graph neural networks (GNNs) have been widely used in various graph-related
problems such as node classification and graph classification, where the
superior performance is mainly established when natural node features are
available. However, it is not well understood how GNNs work without natural
node features, especially regarding the various ways to construct artificial
ones. In this paper, we identify two types of artificial node
features, i.e., positional and structural node features, and provide insights
on why each is more appropriate for certain tasks, i.e., positional node
classification, structural node classification, and graph classification.
Extensive experimental results on 10 benchmark datasets validate our insights,
thus leading to a practical guideline on the choices between different
artificial node features for GNNs on non-attributed graphs. The code is
available at https://github.com/zjzijielu/gnn-exp/.
Comment: This paper has been accepted to the Sixth International Workshop on
Deep Learning on Graphs (DLG-KDD'21) (co-located with KDD'21).
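Common instantiations of the two feature families make the distinction concrete: a structural feature such as one-hot degree encoding assigns identical vectors to structurally equivalent nodes, while a positional feature such as Laplacian eigenvectors assigns distinct coordinates to nodes in different locations. The sketch below uses these two standard constructions; the paper's exact choices may differ.

```python
import numpy as np

def structural_features(adj, max_degree=8):
    """One-hot degree encoding: structurally equivalent nodes (e.g. the
    two endpoints of a path graph) receive identical features."""
    deg = np.minimum(adj.sum(axis=1).astype(int), max_degree)
    n = adj.shape[0]
    feats = np.zeros((n, max_degree + 1))
    feats[np.arange(n), deg] = 1.0
    return feats

def positional_features(adj, dim=2):
    """Graph-Laplacian eigenvectors: nodes in different positions get
    distinct coordinates even when they are structurally equivalent."""
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(lap)          # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]              # skip the trivial constant eigenvector
```

On a 4-node path graph, the two endpoints get identical structural features (both have degree 1) but clearly different positional features.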
Constrained Clustering Based on the Link Structure of a Directed Graph
In many segmentation applications, data objects are clustered based purely on attribute-level similarities. This practice neglects the useful information that resides in the link structure among data objects, as well as valuable expert domain knowledge about the desirable cluster assignment. Link structure can carry meaningful information about the similarity between data objects (e.g., citations), and existing domain knowledge about the preferred outcome should also be incorporated when segmenting data. In this paper, we investigate the segmentation problem combining these three sources of information, which has not been addressed in the existing literature. We propose a segmentation method for directed graphs that incorporates attribute values, link structure, and expert domain knowledge (represented as constraints). The proposed method combines these three types of information to achieve good-quality segmentation of data that can be represented as a directed graph. We conduct comprehensive experiments to evaluate various aspects of our approach and demonstrate the effectiveness of our method.
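One simple way to combine the three information sources before any clustering step is to blend an attribute-similarity matrix with the (symmetrized) link structure and then hard-wire the expert constraints into the resulting affinity matrix. The sketch below illustrates that pre-processing idea only; the blending weight `alpha`, the constraint weight `big`, and the cosine-similarity choice are all assumptions, not the paper's formulation.

```python
import numpy as np

def combined_affinity(attr, adj, must_link, alpha=0.5, big=10.0):
    """Blend attribute similarity with link structure, then overwrite
    entries for expert must-link constraints.

    attr: (n, d) attribute matrix; adj: (n, n) directed adjacency matrix;
    must_link: list of (i, j) pairs that experts require in one cluster.
    """
    unit = attr / np.linalg.norm(attr, axis=1, keepdims=True)
    attr_sim = unit @ unit.T                 # cosine similarity of attributes
    link_sim = (adj + adj.T) / 2.0           # ignore edge direction for affinity
    aff = alpha * attr_sim + (1 - alpha) * link_sim
    for i, j in must_link:
        aff[i, j] = aff[j, i] = big          # force constrained pairs together
    return aff
```

Any standard affinity-based clustering routine can then be run on the combined matrix.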
MedChatZH: a Better Medical Adviser Learns from Better Instructions
Generative large language models (LLMs) have shown great success in various
applications, including question-answering (QA) and dialogue systems. However,
in specialized domains like traditional Chinese medical QA, these models may
perform unsatisfactorily without fine-tuning on domain-specific datasets. To
address this, we introduce MedChatZH, a dialogue model designed specifically
for traditional Chinese medical QA. Our model is pre-trained on Chinese
traditional medical books and fine-tuned with a carefully curated medical
instruction dataset. It outperforms several solid baselines on a real-world
medical dialogue dataset. We release our model, code, and dataset on
https://github.com/tyang816/MedChatZH to facilitate further research in the
domain of traditional Chinese medicine and LLMs.
Comment: 7 pages, 3 figures.
A multi-continuum model for simulating in-situ conversion process in low-medium maturity shale oil reservoir
In-situ conversion has been proposed as applicable to low-medium maturity shale oil reservoirs. However, parallel chemical kinetic reactions and the evolution of shale pores during in-situ conversion make numerical simulation a challenging problem. Although shale is a typical multiscale and heterogeneous medium, few models in previous studies take the difference between the organic and inorganic systems into consideration, and thus cannot simulate fluid flow accurately. In this paper, a multi-continuum model, considering coupled thermal-reactive compositional flow, is developed to simulate the in-situ conversion process in low-medium maturity shale oil reservoirs. The reactions of kerogen and hydrocarbons are quantified using a kinetic reaction model. The evolution of fluid composition and shale properties is also incorporated. The accuracy of the multiple-interacting-continua model and the compositional model is demonstrated by comparison with commercial software and an analytical solution. Then, a typical hexagonal vertical-well heating pattern is simulated and its feasibility is evaluated from an economic perspective. Finally, a series of case studies is conducted to investigate the impact of operation parameters on shale oil production.
Cited as: Wang, Z., Yao, J., Sun, H., Yan, X., Yang, Y. A multi-continuum model for simulating in-situ conversion process in low-medium maturity shale oil reservoir. Advances in Geo-Energy Research, 2021, 5(4): 456-464, doi: 10.46690/ager.2021.04.1
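Kinetic reaction models for kerogen conversion are typically first-order Arrhenius decompositions. The sketch below shows that standard form at constant temperature; the pre-exponential factor `A` and activation energy `Ea` are illustrative placeholders, not the paper's fitted kinetic parameters, and the paper's model handles multiple parallel reactions rather than this single one.

```python
import numpy as np

def kerogen_remaining(t, T, A=1e13, Ea=2.2e5):
    """Fraction of kerogen left after time t (s) at constant temperature
    T (K) under first-order Arrhenius kinetics.

    A: pre-exponential factor (1/s); Ea: activation energy (J/mol).
    """
    R = 8.314                        # universal gas constant, J/(mol K)
    k = A * np.exp(-Ea / (R * T))    # Arrhenius rate constant
    return np.exp(-k * t)            # first-order decay of kerogen mass
```

The strong temperature sensitivity of `k` is what makes the heating pattern and operation parameters so influential on conversion economics.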
Towards Automatic Boundary Detection for Human-AI Hybrid Essay in Education
Human-AI collaborative writing has been greatly facilitated with the help of
modern large language models (LLMs), e.g., ChatGPT. While acknowledging the
convenience brought by this technological advancement, educators are also
concerned that students might leverage LLMs to partially complete their writing
assignments and pass off the human-AI hybrid text as their original work. Driven by such
concerns, in this study, we investigated the automatic detection of Human-AI
hybrid text in education, where we formalized the hybrid text detection as a
boundary detection problem, i.e., identifying the transition points between
human-written content and AI-generated content. We constructed a hybrid essay
dataset by partially removing sentences from the original student-written
essays and then instructing ChatGPT to fill in for the incomplete essays. Then
we proposed a two-step detection approach in which we (1) separated AI-generated
content from human-written content during the embedding learning process; and
(2) calculated the distances between every two adjacent prototypes (a prototype
is the mean of a set of consecutive sentences from the hybrid text in the
embedding space) and assumed that the boundary lies between the two
prototypes that are furthest apart. Through extensive
experiments, we summarized the following main findings: (1) The proposed
approach consistently outperformed the baseline methods across different
experiment settings; (2) The embedding learning process (i.e., step 1) can
significantly boost the performance of the proposed approach; (3) When
detecting boundaries for single-boundary hybrid essays, the performance of the
proposed approach could be enhanced by adopting a relatively large prototype
size, leading to a \% improvement (against the second-best baseline method)
in the in-domain setting and an \% improvement in the out-of-domain setting.
Comment: 9 pages including references, 2 figures.
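Step (2) of the approach, placing the boundary between the two adjacent prototypes that are furthest apart, can be sketched directly. The function below assumes per-sentence embeddings are already available (the paper learns these in step (1)); the distance metric and return convention are illustrative choices.

```python
import numpy as np

def detect_boundary(sent_embs, proto_size=3):
    """Return the sentence index at which the human/AI boundary is placed.

    sent_embs: (n_sentences, emb_dim) array of sentence embeddings.
    Prototypes are means of proto_size consecutive sentences; the boundary
    goes between the pair of adjacent prototypes with the largest distance.
    """
    n = len(sent_embs)
    protos = [sent_embs[i:i + proto_size].mean(axis=0)
              for i in range(0, n, proto_size)]
    dists = [np.linalg.norm(protos[k + 1] - protos[k])
             for k in range(len(protos) - 1)]
    k = int(np.argmax(dists))
    return (k + 1) * proto_size    # first sentence after the boundary prototype
```

With well-separated embeddings the furthest adjacent pair straddles the true transition, which is why a larger prototype size (a less noisy mean) helps on single-boundary essays.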